Dystopic - Iron Dome for America and The Great AI Upheaval


February 1, 2025

Dystopic Newsletter

Iron Dome for America & the Great AI Upheaval

The Proliferated Warfighter Space Architecture – a part of "Iron Dome for America"

On January 27th, President Trump signed an executive order calling for a massive upgrade and extension of U.S. missile defenses, which he referred to as the "Iron Dome for America." The new program is focused on the development of defenses against hypersonic weapons and other advanced aerial threats. The executive order directs the Department of Defense to pursue space-based interceptors reminiscent of Ronald Reagan's Star Wars "Brilliant Pebbles" as part of the proposed system architecture.

Defense analysts voiced concern that the Iron Dome for America could lead to a new arms race and further provoke Russia to carry out Putin's threat to place nuclear weapons in orbit. Despite these concerns, the President ordered the DoD to have an architecture proposal ready in 60 days for review.

The U.S. currently deploys a limited missile defense of the homeland, the Ground-Based Midcourse Defense (GMD). Initial deployment of the GMD system began in 2004, two years after the U.S. withdrew from the ABM Treaty. Missile batteries were placed at Fort Greely, Alaska, and Vandenberg Air Force Base, California, ideal locations to counter an attack from China, Russia, or North Korea. GMD interceptors are designed to destroy ballistic missiles in midcourse and are the world's most sophisticated, longest-range antimissile weapons. Eight interceptors were initially deployed; a decade later, deployments reached a total of 44. Today, 64 third-generation GMD interceptors are deployed, and they remain the foundation of U.S. missile defense of the homeland.

The GMD system is designed to defend against a limited or accidental ballistic missile attack. It can handle an attack from a rogue nuclear state like North Korea or a future nuclear-armed Iran. It is ineffective, however, against hypersonic missiles or the mass nuclear strikes that China or Russia could deliver.

There is the additional issue of detecting a hypersonic missile attack. The current U.S. early warning system was designed to detect and track the ballistic missile threats of the Cold War era. While current ground-based surveillance systems can detect hypersonic missiles, they provide only a few minutes of warning, as opposed to 20 or more minutes of early warning for a ballistic missile attack. The following diagram illustrates the hypersonic early warning detection problem.

Given the limitations of ground-based detection of hypervelocity weapons, the U.S. has shifted resources to expand its space-based detection capability. The current Space Based Infrared System (SBIRS), which detects missile launches, is being augmented with a constellation of low-Earth-orbit satellites, an expanded tracking layer of the National Defense Space Architecture. The U.S. Space Development Agency (SDA) is developing the tracking layer's missile detection satellites, and SDA awarded SpaceX the $1.8B contract for the Starshield satellite communications network. The new detection and tracking system, called the Proliferated Warfighter Space Architecture (PWSA), has already started deployment. Within a few years, the U.S. will close the window of vulnerability in hypersonic missile detection.

When we combine the Proliferated Warfighter Space Architecture (PWSA) space-based sensors with the proposed space-based segment of the "Iron Dome for America," the U.S. will possess a global early warning system with a global antimissile capability, a defense that will protect not only the U.S. but all of our allies as well.

Like the original Reagan-era Star Wars concept, this will be expensive and just as controversial. However, technological advances and the rapidly declining cost of placing satellites in orbit may make the proposed system a reality as opposed to a pipe dream. Time will tell.

Now, a short break for an update on my book and other activities

Final revision edits continue. This week involved the last updates to my graphics and figures. I'm still on track for a late May release (hopefully). You can find out more about my book and release updates at my website HERE.

I'll be giving a lecture at the Small Satellite Symposium in Mountain View, CA, next week on Wednesday, February 4th. The topic, not so coincidentally, is "Tech Brief – Technological Challenges of SAR." I'll also be leading a panel discussion titled "Discussion on AI Applications in Earth Observation."

You can find out more about the Small Satellite Symposium and this summer's European Small Satellite Symposium HERE.

Now, back to Our Scheduled Dystopic …

The Great AI Upheaval

Andrew Grove, a co-founder of Intel, warned his engineers that the competition could, at any time, disrupt their technology and their company. As he counseled:

Only the Paranoid Survive

Sam Altman, the CEO of OpenAI, should have heeded Andrew Grove's advice. Last week, OpenAI, SoftBank, and Oracle were basking in the limelight with President Trump after announcing their $500 billion joint investment over 5 years in AI infrastructure, dubbed "Stargate." The commitment of $100B a year for AI development allows OpenAI to keep pace with Meta and Amazon, both of which are projecting capital spending rates of nearly $100B a year for 2025 and beyond. Much of that $200B will be spent on data centers and infrastructure.

Meta, OpenAI, Google, and other AI stalwarts were riding high in their perceived dominance after the Stargate announcement. Large Language Model (LLM) AI requires big data centers filled with massive numbers of AI ASIC-equipped servers and a revitalized nuclear energy sector to provide the green energy to power it all. The extended AI ecosystem includes data centers, semiconductors, computer hardware, and the nuclear power sector. The companies behind these technologies were all experiencing rising stock valuations.

One week later, the success of a little-known Chinese startup, DeepSeek, completely upended the AI ecosystem. The Chinese upstart released an app whose tests showed it was on par with OpenAI's latest ChatGPT release. As if that were not stunning enough, DeepSeek's CEO, Liang Wenfeng, claimed the model was trained for only $5.6M using 2,048 of Nvidia's performance-capped H800 GPU ASICs. By comparison, OpenAI has spent billions and uses tens of thousands of the more powerful H100 and H200 Nvidia GPU ASICs.

Wall Street panicked. DeepSeek called into question the big AI models used by OpenAI, Meta, Amazon, and X. Stocks plummeted across the ecosystem. Nvidia lost half a trillion dollars in market value in a single day, a 17% fall from its peak share price. Nuclear power stocks such as Constellation Energy and NuScale Power were down 21% to 28%. Data center suppliers were particularly hard hit, with Vertiv Holdings seeing a 30% one-day loss.

DeepSeek had called into question the entire Large Language Model (LLM) approach to AI that the U.S. AI giants have been pursuing…OR PERHAPS NOT.

As Microsoft's Olivia Shone notes in her comparison of large (LLM) and small (SLM) AI language models, smaller models typically require less computational power, reducing costs. However, SLMs might not be well suited for more complex tasks like analyzing code or generating documents. Larger models offer superior accuracy and versatility but come with higher infrastructure and operational expenses. LLMs are also the pathway to true machine intelligence and AI models that can reason like humans. Shone's paper is available HERE.

As it turns out, rumors of the demise of LLMs and the investments they require were premature. DeepSeek's claims of $5.6 million and just 2,048 GPUs were too good to be true.

Within days, investigative reporters and AI experts across the globe began finding significant issues with DeepSeek's claims.

  • Industry analysts at SemiAnalysis reported that DeepSeek's R1 AI model required over $500M in Nvidia silicon, not the roughly 2,000 older-generation Nvidia chips initially claimed. DeepSeek's investment claims appear to have been a fabrication.
  • DeepSeek did not train its models from raw data collection. Instead, DeepSeek accessed OpenAI's models to train its own, a process called "distillation." Distillation, using an LLM's outputs to train an SLM, is a frowned-upon practice in the AI community. In fact, it violates the terms of use of OpenAI and other platforms.

Distillation is a shortcut around the massive data analysis used by OpenAI. Think of it as using a cheat sheet of the answers rather than studying and gaining a broader understanding of the material before taking a test. An alternative analogy is reading a CliffsNotes study guide rather than the actual book. In either case, it is an intellectual shortcut.
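For readers who want to see the mechanics, here is a minimal sketch of the distillation idea, assuming a PyTorch setup; it is my illustration of the standard technique, not DeepSeek's or OpenAI's actual code. The student model is trained to match the teacher's softened output distribution rather than ground-truth labels:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Train the student to mimic the teacher's softened output
    distribution (Hinton-style knowledge distillation sketch)."""
    # Soften both distributions with a temperature so the teacher's
    # knowledge about near-miss answers survives, not just its top pick.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence, scaled by T^2 to keep gradient magnitudes stable.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

# Hypothetical usage: teacher logits come from the large model's
# forward pass; only the student's weights receive gradients.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```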

Microsoft and OpenAI have since banned all accounts related to DeepSeek.

So, DeepSeek was not all that had been claimed. Within a few days, U.S. AI ecosystem stocks recovered. However, the episode left some questions for the AI community, which we will discuss shortly.

Despite all the media hype, both positive and negative, DeepSeek's R1 model embodied several innovative techniques that the Silicon Valley gurus of LLMs should take note of.

First, DeepSeek effectively used reduced-complexity data optimizations. Tim Dettmers, a former Meta AI researcher, discovered that his work was cited in recently published DeepSeek research papers. His paper, 8-bit Optimizers via Block-wise Quantization, co-written with Mike Lewis, Sam Shleifer, and Luke Zettlemoyer, was originally published in October 2021. The paper explained how AI training computations could be greatly simplified by using 8-bit rather than 32-bit numbers while retaining most, if not all, of the 32-bit accuracy. You can find Dettmers' research paper HERE.
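To make the idea concrete, here is a toy block-wise quantization routine in Python with NumPy; this is my own illustration of the concept, not the code from Dettmers' paper. Each block of values carries its own scale factor, so an outlier in one block does not wreck precision everywhere else:

```python
import numpy as np

def quantize_blockwise(x, block_size=64):
    """Quantize a float32 vector to int8 in independent blocks;
    each block keeps its own scale, so one outlier only hurts
    precision within its own block."""
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True)
    scales[scales == 0] = 1.0                   # avoid divide-by-zero
    q = np.round(blocks / scales * 127).astype(np.int8)
    return q, scales

def dequantize_blockwise(q, scales, n):
    """Map the int8 blocks back to approximate float32 values."""
    return (q.astype(np.float32) / 127 * scales).reshape(-1)[:n]

x = np.random.randn(10_000).astype(np.float32)
q, scales = quantize_blockwise(x)
x_hat = dequantize_blockwise(q, scales, len(x))
# 8 bits per value instead of 32, at a small reconstruction error.
print("max abs error:", np.abs(x - x_hat).max())
```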

Next, DeepSeek created a larger model by layering a set of SLMs to form an effective LLM. In this layered "mixture of experts," each SLM is trained for a defined area of expertise. Small models reduce the complexity of AI training and, by extension, reduce the hardware (and number of AI chips) required for learning.
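Here is a minimal mixture-of-experts sketch, again assuming PyTorch and not DeepSeek's actual architecture: a small gating network routes each token to its top two expert sub-networks, so most of the model's parameters sit idle on any one token:

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Toy top-2 mixture-of-experts layer (illustrative only)."""
    def __init__(self, dim=256, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                          nn.Linear(4 * dim, dim))
            for _ in range(n_experts))
        self.gate = nn.Linear(dim, n_experts)   # tiny router network
        self.top_k = top_k

    def forward(self, x):                       # x: (tokens, dim)
        weights, idx = self.gate(x).topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)       # mixing weights per token
        out = torch.zeros_like(x)
        # Only each token's chosen experts run; the rest stay idle,
        # which is where the compute savings come from.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

tokens = torch.randn(16, 256)                   # 16 tokens, 256-dim each
print(TinyMoE()(tokens).shape)                  # torch.Size([16, 256])
```

Because only a couple of experts run per token, compute per token stays close to that of a single small model even as the total parameter count grows.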

Finally, DeepSeek used the distillation technique mentioned previously. Again, this reduces the time and complexity of training. However, I suspect there are fundamental issues in using a large AI model to train a subordinate, smaller AI model. The largest problem is that errors in the first AI's learning propagate forward when distillation is used. This is why U.S. research favors LLM training from the ground up and broad incorporation of new learning rather than shortcuts.
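A back-of-the-envelope simulation of that worry, using accuracy numbers I have assumed purely for illustration (a 90%-accurate teacher and a student that captures 95% of the teacher's behavior):

```python
import random

random.seed(0)
N = 100_000
TEACHER_ACC = 0.90   # assumed accuracy of the large teacher model
IMITATION = 0.95     # assumed fraction of teacher behavior the student captures

correct = 0
for _ in range(N):
    teacher_right = random.random() < TEACHER_ACC
    copies_teacher = random.random() < IMITATION
    # The student never sees ground truth, so it is right only when
    # it faithfully copies an answer the teacher got right.
    if teacher_right and copies_teacher:
        correct += 1

print(f"student accuracy ~ {correct / N:.3f}")  # ~0.855, below the teacher's 0.90
```

Under these assumptions, the student's ceiling is the product of the two rates, roughly 85.5%; the teacher's mistakes are baked in and indistinguishable from truth.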

The AI community, up to this point, has been open in sharing research and know-how. That will likely end. The U.S. had hoped to restrain China's AI efforts by restricting access to hardware and advanced ASICs. DeepSeek showed that creative use of limited resources could leapfrog better-resourced U.S. teams. Calls are already out to limit access to AI research as a matter of national security.

I doubt this will work. Leadership in technology is a global race. If anything, this week shook U.S. AI companies out of their complacency. It is time to get to work, because any lead is short-lived and can and will be challenged - Only the Paranoid Survive!

Until next week ...

Dystopic - The Technology Behind Today's News

Thank you for your readership and support. Please recommend Dystopic to friends and family who are interested, or just share this email. New Readers can sign up for Dystopic HERE

